Goto

Collaborating Authors

Neural Information Processing Systems

We are very grateful to all reviewers for their detailed, insightful, and constructive comments and questions. Many of the comments were truly helpful for improving the quality of the paper, and some of them even enlightened us, correcting initial claims of ours that turned out to be wrong. We believe these points are very important, and we will pursue them in our ongoing study. Our responses (in blue) to the reviewers' comments and questions (in black, bold italic) are as follows; the column "FC" is excerpted from our responses. We will refine our claims and also refer to these SA VI methods, as it turns out that our original claim was faulty.


Attributing Responsibility in AI-Induced Incidents: A Computational Reflective Equilibrium Framework for Accountability

Ge, Yunfei, Zhu, Quanyan

arXiv.org Artificial Intelligence

The pervasive integration of Artificial Intelligence (AI) has introduced complex challenges in attributing responsibility and accountability in the event of incidents involving AI-enabled systems. The interconnectivity of these systems and the ethical concerns surrounding AI-induced incidents, coupled with uncertainties in AI technology and the absence of corresponding regulations, have made traditional responsibility attribution challenging. To this end, this work proposes a Computational Reflective Equilibrium (CRE) approach to establish a coherent and ethically acceptable responsibility attribution framework for all stakeholders. The computational approach provides a structured analysis that overcomes the limitations of conceptual approaches in dealing with dynamic and multifaceted scenarios, showcasing the framework's explainability, coherence, and adaptivity in the responsibility attribution process. We examine the pivotal role of the initial activation level associated with claims in the equilibrium computation. Using an AI-assisted medical decision-support system as a case study, we illustrate how different initializations lead to diverse responsibility distributions. The framework offers valuable insights into accountability in AI-induced incidents, facilitating the development of a sustainable and resilient system through continuous monitoring, revision, and reflection.
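The idea that initial activation levels steer which equilibrium the claim network settles into can be illustrated with a minimal connectionist coherence computation. This is a hedged sketch, not the paper's exact CRE formulation: the claim names, link weight, decay rate, and update rule below are all illustrative assumptions.

```python
# Sketch of an equilibrium computation over a network of claims, in the
# spirit of connectionist coherence models. All names and weights are
# hypothetical, chosen only to show initialization-dependent equilibria.

def settle(nodes, links, init, decay=0.05, iters=200):
    """Iterate activations in [-1, 1] toward a (near-)equilibrium.

    links: {(a, b): w} with w > 0 for coherent claim pairs and
    w < 0 for incoherent (competing) pairs.
    """
    act = {n: init.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        nxt = {}
        for n in nodes:
            # Net input to claim n from all linked claims.
            net = 0.0
            for (a, b), w in links.items():
                if a == n:
                    net += w * act[b]
                elif b == n:
                    net += w * act[a]
            # Interactive-activation update: excitation pushes toward +1,
            # inhibition toward -1, with mild decay toward 0.
            if net > 0:
                delta = net * (1.0 - act[n])
            else:
                delta = net * (act[n] + 1.0)
            nxt[n] = max(-1.0, min(1.0, act[n] * (1.0 - decay) + delta))
        act = nxt
    return act


# Two competing responsibility claims in a hypothetical medical-AI
# incident: attributing fault to the developer vs. to the clinician.
nodes = ["developer_responsible", "clinician_responsible"]
links = {("developer_responsible", "clinician_responsible"): -0.5}

# Different initial activation levels settle into different equilibria:
# whichever claim starts with a small head start ends up accepted
# (activation near +1) while the competing claim is rejected (near -1).
eq_a = settle(nodes, links, {"developer_responsible": 0.3})
eq_b = settle(nodes, links, {"clinician_responsible": 0.3})
```

Running both calls shows the same network converging to opposite responsibility distributions, which mirrors the abstract's point that initialization choices matter and should themselves be monitored and revised.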


Putting Context in Context: the Impact of Discussion Structure on Text Classification

Penzo, Nicolò, Longa, Antonio, Lepri, Bruno, Tonelli, Sara, Guerini, Marco

arXiv.org Artificial Intelligence

Current text classification approaches usually focus on the content to be classified. Contextual aspects (both linguistic and extra-linguistic) are usually neglected, even in tasks based on online discussions. Still, in many cases, the multi-party and multi-turn nature of the context from which these elements are selected can be fruitfully exploited. In this work, we propose a series of experiments on a large dataset for stance detection in English, in which we evaluate the contribution of different types of contextual information, i.e., linguistic, structural, and temporal, by feeding them as natural language input into a transformer-based model. We also experiment with different amounts of training data and analyse the topology of local discussion networks in a privacy-compliant way. Results show that structural information can be highly beneficial to text classification, but only under certain circumstances (e.g., depending on the amount of training data and on discussion chain complexity). Indeed, we show that contextual information on smaller datasets from other classification tasks does not yield significant improvements. Our framework, based on local discussion networks, allows the integration of structural information while minimising user profiling, thus preserving user privacy.
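Feeding linguistic, structural, and temporal context "as natural language input" can be sketched as a simple serialization step that linearizes the discussion chain before tokenization. This is a hedged illustration only: the field names, bracket tags, and separator token are assumptions, not the authors' exact input format.

```python
# Sketch: linearize a discussion chain into one text input for a
# transformer classifier. Depth tags carry structural context,
# "min ago" tags carry temporal context, and anonymized user ids
# minimize profiling. All formatting choices here are hypothetical.

def build_input(chain, target, sep="</s>"):
    """chain: list of {"author_id", "text", "minutes_ago"} dicts,
    ordered from the root post down to the parent of the target post.
    Returns a single string ending with the post to be classified."""
    parts = []
    for depth, post in enumerate(chain):
        parts.append(
            f"[depth {depth}] [user {post['author_id']}] "
            f"[{post['minutes_ago']} min ago] {post['text']}"
        )
    parts.append(f"[target] {target}")
    return f" {sep} ".join(parts)


chain = [
    {"author_id": "u1", "text": "Vaccines are safe.", "minutes_ago": 30},
    {"author_id": "u2", "text": "Source?", "minutes_ago": 10},
]
serialized = build_input(chain, "Here is a meta-analysis.")
```

The resulting string can then be passed to any standard sequence classifier; keeping user ids as opaque tokens (u1, u2) is one way to retain structural signal while limiting user profiling, in line with the privacy-compliant design described above.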